Results 1 - 7 of 7
1.
Electronics ; 12(11):2496, 2023.
Article in English | ProQuest Central | ID: covidwho-20234583

ABSTRACT

The growing volume of sensitive content on the Internet, such as pornography and child sexual abuse material, together with the increasing amount of time that people (especially children) spend online, has led to a rise in the distribution of such content (e.g., images of children being sexually abused, real-time videos of such abuse, grooming activities, etc.). Effective IT tools that automate the detection and blocking of this type of material are therefore essential, as manual filtering of such huge volumes of data is practically impossible. The goal of this study is to carry out a comprehensive review of the learning strategies for sensitive-content detection available in the literature, from the most conventional techniques to cutting-edge deep learning algorithms, highlighting the strengths and weaknesses of each, as well as the datasets used. The performance and scalability of the strategies reviewed in this work depend on the heterogeneity of the dataset, the feature extraction techniques (hashes, visual, audio, etc.) and the learning algorithms. Finally, new lines of research in sensitive-content detection are presented.
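As an illustration of the hash-based feature-extraction strategy surveyed above, the sketch below matches an image's perceptual hash against a blocklist of known hashes. It is a generic example assuming the Pillow and imagehash packages and a hypothetical known_hashes set; it is not code from any of the reviewed systems.

```python
# Minimal sketch of hash-based matching against known sensitive material.
# The blocklist values here are placeholders, not real hashes.
from PIL import Image
import imagehash

# Perceptual hashes of known flagged images (hypothetical entries).
known_hashes = {imagehash.hex_to_hash("8f373714acfcf4d0")}

def is_known_sensitive(path: str, max_distance: int = 5) -> bool:
    """Flag an image whose perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))
    # ImageHash subtraction returns the Hamming distance between hashes.
    return any(candidate - known <= max_distance for known in known_hashes)
```

Hash matching only covers previously catalogued material; the visual and audio features and learning algorithms discussed in the review address unseen content.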

2.
Electronics ; 12(7):1551, 2023.
Article in English | ProQuest Central | ID: covidwho-2296491

ABSTRACT

Lung ultrasound is used to detect various artifacts in the lungs that support the diagnosis of different conditions. There is ongoing research to support the automatic detection of such artifacts using machine learning. We propose a solution that uses analytical computer vision methods to detect two types of lung artifacts, namely A- and B-lines. We evaluate the proposed approach on the POCUS dataset and on data acquired from a hospital. We show that by using the Fourier transform, we can analyze lung ultrasound images in real time and classify videos with an accuracy above 70%. We also evaluate the method's applicability to segmentation, showcasing its high success rate for B-lines (89% accuracy) and its shortcomings for A-line detection. We then propose a hybrid solution that combines neural networks and analytical methods to increase accuracy in horizontal line detection, emphasizing the pleura.
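A minimal sketch of the kind of frequency-domain analysis described above: scoring how periodic the horizontal structure of a frame is via the Fourier transform of its depth-intensity profile. The function name, threshold, and preprocessing are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of a Fourier-based check for periodic horizontal artifacts
# (A-line-like repetition) in a lung ultrasound frame.
import numpy as np

def horizontal_periodicity_score(frame: np.ndarray) -> float:
    """frame: 2D grayscale image (depth x width), values in [0, 255]."""
    profile = frame.mean(axis=1)              # average intensity per depth row
    profile = profile - profile.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))   # magnitude spectrum along depth
    # Ratio of the strongest non-DC peak to total spectral energy: higher
    # values suggest regularly spaced horizontal lines in the image.
    return float(spectrum[1:].max() / (spectrum[1:].sum() + 1e-8))

# Usage (hypothetical threshold): a high score hints at A-line-like patterns.
# frame = cv2.imread("clip_frame.png", cv2.IMREAD_GRAYSCALE)
# print(horizontal_periodicity_score(frame))
```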

3.
1st Workshop on NLP for COVID-19 at the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 ; 2020.
Article in English | Scopus | ID: covidwho-2271699

ABSTRACT

We present a simple NLP methodology for detecting COVID-19 misinformation videos on YouTube by leveraging user comments. We use transfer learning with pre-trained models to build a multi-label classifier that can categorize conspiratorial content. We use the percentage of misinformation comments on each video as a new feature for video classification. We show that the inclusion of this feature in simple models yields an accuracy of up to 82.2%. Furthermore, we verify the significance of the feature by performing a Bayesian analysis. Finally, we show that adding the first hundred comments as tf-idf features increases the video classifier's accuracy to up to 89.4%. © ACL 2020. All rights reserved.
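A minimal sketch of how the two video-level features described above could be combined, assuming scikit-learn and a placeholder comment_is_misinfo callable standing in for the pre-trained multi-label comment classifier; this is an illustration, not the authors' code.

```python
# Minimal sketch: misinformation-comment share plus tf-idf of the first
# hundred comments, fed to a simple video-level classifier.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def build_features(videos, comment_is_misinfo, vectorizer=None):
    """videos: list of dicts with a 'comments' list of strings.
    comment_is_misinfo: callable returning True for a misinformation comment
    (in the paper this role is played by a pre-trained classifier)."""
    misinfo_share = np.array([
        [np.mean([comment_is_misinfo(c) for c in v["comments"]] or [0.0])]
        for v in videos
    ])
    first_100 = [" ".join(v["comments"][:100]) for v in videos]
    if vectorizer is None:                      # fit on training data only
        vectorizer = TfidfVectorizer(max_features=5000).fit(first_100)
    tfidf = vectorizer.transform(first_100)
    return hstack([csr_matrix(misinfo_share), tfidf]), vectorizer

# X_train, vec = build_features(train_videos, comment_classifier)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```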

4.
Bioengineering (Basel) ; 10(3)2023 Feb 21.
Article in English | MEDLINE | ID: covidwho-2260315

ABSTRACT

The SARS-CoV-2 pandemic challenged health systems worldwide, creating a need for practical, quick and highly trustworthy diagnostic instruments to help medical personnel. The disease features a long incubation period and a high contagion rate, causing bilateral multi-focal interstitial pneumonia that generally progresses to acute respiratory distress syndrome (ARDS), and it has caused hundreds of thousands of casualties worldwide. Guidelines for first-line diagnosis of pneumonia suggest Chest X-rays (CXR) for patients exhibiting symptoms. Potential alternatives include Computed Tomography (CT) scans and Lung UltraSound (LUS). Deep learning (DL) has been helpful in diagnosis using CT scans, LUS, and CXR, with CT scans commonly yielding the most precise results. However, CXR and CT scans present several drawbacks, including high costs. Radiation-free LUS imaging requires high expertise, and physicians thus underutilise it. Nevertheless, LUS has demonstrated a strong correlation with CT scans and reliability in pneumonia detection, even in the early stages. Here, we present an LUS video-classification approach based on contemporary DL strategies, developed in close collaboration with the Emergency Department (ED) of Fondazione IRCCS Policlinico San Matteo in Pavia. This research addressed the detection of SARS-CoV-2 patterns, ranked according to three severity scales, using a reliable dataset comprising ultrasounds from linear and convex probes in 5400 clips from 450 hospitalised subjects. The main contributions of this study are the adoption of a standardised severity ranking scale to evaluate pneumonia, an evaluation that relies on video summarisation through key-frame selection algorithms, and the design and development of a video-classification architecture that emerged as the most promising, whereas the literature primarily concentrates on frame-pattern recognition. By using advanced techniques such as transfer learning and data augmentation, we achieved an F1-Score of over 89% across all classes.
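A minimal sketch of key-frame selection by frame differencing, one simple way to summarise a clip before video classification; the OpenCV-based implementation and the choice of 16 frames are illustrative assumptions, not the authors' key-frame selection algorithm.

```python
# Minimal sketch: pick the frames with the largest change from their
# predecessor as a rough proxy for informative key frames.
import cv2
import numpy as np

def select_key_frames(video_path: str, n_frames: int = 16):
    cap = cv2.VideoCapture(video_path)
    frames, scores, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append(float(cv2.absdiff(gray, prev).mean()))
            frames.append(frame)
        prev = gray
    cap.release()
    top = np.argsort(scores)[-n_frames:]          # highest-change frames
    return [frames[i] for i in sorted(top)]       # keep temporal order
```

The selected frames would then be passed to the video-classification network (trained with transfer learning and data augmentation, as in the abstract).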

5.
2022 IEEE Region 10 International Conference, TENCON 2022 ; 2022-November, 2022.
Article in English | Scopus | ID: covidwho-2192087

ABSTRACT

Young children are at an increased risk of contracting contagious diseases such as COVID-19 due to improper hand hygiene. An autonomous social agent that observes children while they wash their hands and encourages good handwashing practices could help handwashing behavior become a habit. In this article, we present a human action recognition system, part of the vision system of a social robot platform, to assist children in developing a correct handwashing technique. A modified convolutional neural network (CNN) architecture with a Channel Spatial Attention Bilinear Pooling (CSAB) framework, with a VGG-16 architecture as the backbone, is trained and validated on an augmented dataset. The modified architecture generalizes well, with an accuracy of 90% on the WHO-prescribed handwashing steps even in an unseen environment. Our findings indicate that the approach can recognize even subtle hand movements in video and can be used for gesture detection and classification in social robotics. © 2022 IEEE.
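A simplified PyTorch sketch of a channel-and-spatial attention block on top of VGG-16 features: layer sizes, the number of handwashing steps, and the average-pooling head (standing in for bilinear pooling) are assumptions, not the paper's exact CSAB specification.

```python
# Minimal sketch of channel + spatial attention over VGG-16 feature maps.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 8), nn.ReLU(),
            nn.Linear(channels // 8, channels), nn.Sigmoid())
        self.spatial_conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, x):                                 # x: (B, C, H, W)
        ch = self.channel_fc(x.mean(dim=(2, 3)))          # channel weights
        x = x * ch[:, :, None, None]                      # channel attention
        sp = torch.sigmoid(self.spatial_conv(x.mean(dim=1, keepdim=True)))
        return x * sp                                     # spatial attention

class HandwashNet(nn.Module):
    def __init__(self, num_steps: int = 7):               # step count assumed
        super().__init__()
        self.backbone = vgg16(weights="DEFAULT").features  # (B, 512, H, W)
        self.attn = ChannelSpatialAttention(512)
        self.head = nn.Linear(512, num_steps)

    def forward(self, x):
        feats = self.attn(self.backbone(x))
        return self.head(feats.mean(dim=(2, 3)))          # average pooling
```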

6.
5th International Symposium on Mobile Internet Security, MobiSec 2021 ; 1544 CCIS:171-194, 2022.
Article in English | Scopus | ID: covidwho-1707553

ABSTRACT

The outbreak of the COVID-19 pandemic has forced employees worldwide to rely heavily on their mobile devices to access corporate systems. This new scenario has made mobile devices more susceptible to malicious applications, which are developed every year to conduct a range of hostile activities. In response, many Deep Learning (DL) based solutions have been proposed over the last decade, considering both static and dynamic approaches. However, static solutions are adversely affected by obfuscation techniques and polymorphic applications, while dynamic ones cannot reduce the damage caused during application execution. To this end, this paper proposes a novel approach called API-Streams to minimize damage at run-time. We investigate several video-classification tasks through CNN-LSTM Autoencoders (CNN-LSTM-AEs). More precisely, we combine the capability of AEs to find compact features with the classification abilities of Deep Neural Networks (DNNs), and we show that the proposed approach achieves an average accuracy of 98% in the presence of several unbalanced training datasets. Finally, we use the t-distributed Stochastic Neighbor Embedding (t-SNE) representation technique to investigate the ability of the employed AE to cluster data into their respective classes while limiting their overlap. © 2022, Springer Nature Singapore Pte Ltd.
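A minimal PyTorch sketch of a CNN-LSTM autoencoder over tokenised API-call streams with a classification head, in the spirit of the architecture described above; the vocabulary size, embedding width, and layer sizes are assumptions, not the paper's values.

```python
# Minimal sketch: CNN-LSTM autoencoder with a classifier on the encoding.
import torch
import torch.nn as nn

class CnnLstmAE(nn.Module):
    def __init__(self, vocab_size=1000, emb=64, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, emb, kernel_size=3, padding=1)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, emb, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, tokens):                     # tokens: (B, T) int64 ids
        x = self.embed(tokens)                     # (B, T, emb)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        enc_out, _ = self.encoder(x)               # (B, T, hidden)
        recon, _ = self.decoder(enc_out)           # (B, T, emb) reconstruction
        logits = self.classifier(enc_out[:, -1])   # classify compact features
        return recon, logits

# Training would combine a reconstruction loss on `recon` against the
# embedded input with a cross-entropy loss on `logits`; the final encoder
# states can also be projected with sklearn's TSNE for visual inspection.
```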

7.
Inform Med Unlocked ; 25: 100687, 2021.
Article in English | MEDLINE | ID: covidwho-1343247

ABSTRACT

There is a crucial need for quick testing and diagnosis of patients during the COVID-19 pandemic. Lung ultrasound is an imaging modality that is cost-effective, widely accessible, and can be used to diagnose acute respiratory distress syndrome in patients with COVID-19. It can be used to find important characteristics in the images, including A-lines, B-lines, consolidation, and pleural effusion, all of which inform the clinician in monitoring and diagnosing the disease. With portable ultrasound transducers, lung ultrasound images can be easily acquired; however, the images are often of poor quality and typically require interpretation by an expert clinician, which may be time-consuming and is highly subjective. We propose a method for fast and reliable interpretation of lung ultrasound images by use of deep learning, based on the Kinetics-I3D network. Our learned model can classify an entire lung ultrasound scan obtained at the point of care, without requiring preprocessing or a frame-by-frame analysis. We compare our video classifier against ground-truth classification annotations provided by a set of expert radiologists and clinicians, covering A-lines, B-lines, consolidation, and pleural effusion. Our classification method achieves an accuracy of 90% and an average precision score of 95% with 5-fold cross-validation. The results indicate the potential of automated analysis of portable lung ultrasound images to assist clinicians in screening and diagnosing patients.
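A minimal sketch of a 5-fold cross-validation protocol for reporting accuracy and average precision, with the video classifier itself abstracted behind a placeholder callable; this shows only the evaluation scaffold, not the Kinetics-I3D model.

```python
# Minimal sketch: 5-fold cross-validation reporting accuracy and macro
# average precision for a multi-class video classifier.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, average_precision_score

def cross_validate(train_and_predict, clips, labels, n_splits=5, seed=0):
    """labels: np.ndarray of integer class ids.
    train_and_predict(train_idx, test_idx) must return per-clip class
    probabilities for the test split (shape: n_test x n_classes)."""
    accs, aps = [], []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(clips):
        probs = train_and_predict(train_idx, test_idx)
        preds = probs.argmax(axis=1)
        y_true = labels[test_idx]
        accs.append(accuracy_score(y_true, preds))
        # one-vs-rest average precision per class, averaged across classes
        y_onehot = np.eye(probs.shape[1])[y_true]
        aps.append(average_precision_score(y_onehot, probs, average="macro"))
    return float(np.mean(accs)), float(np.mean(aps))
```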
